Cover's theorem is a statement in computational learning theory and is one of the primary theoretical motivations for the use of non-linear kernel methods in machine learning applications. The theorem states that a training set which is not linearly separable can, with high probability, be transformed into a linearly separable one by projecting it into a higher-dimensional space via some non-linear transformation.

The proof is straightforward, and a deterministic mapping may be used: suppose there are ''n'' samples, and lift them onto the vertices of the simplex in the (''n'' − 1)-dimensional real space. Because the vertices of a simplex are affinely independent, every partition of the lifted samples into two sets is separable by a linear separator. QED.
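The argument can be made concrete with a small numerical sketch. The snippet below is an illustration only, not part of the theorem's statement: the sample data, the labels, and the helper <code>phi</code> are assumptions chosen for the example. It lifts ''n'' arbitrary samples onto the vertices of the standard simplex, realised as one-hot vectors, and verifies that an explicitly constructed hyperplane separates an arbitrary dichotomy.

<syntaxhighlight lang="python">
import numpy as np

# A minimal sketch of the simplex-lifting argument from the proof above.
# The concrete sample data, labels and the mapping phi are illustrative
# assumptions, not part of Cover's theorem itself.

rng = np.random.default_rng(0)

n = 10                              # number of training samples
X = rng.normal(size=(n, 2))         # arbitrary 2-D samples (in general not linearly separable)
y = rng.choice([-1, 1], size=n)     # an arbitrary dichotomy (two-class labelling) of the samples

def phi(i: int, n: int) -> np.ndarray:
    """Lift the i-th sample onto the i-th vertex of the standard simplex.

    The lift depends only on the sample's index, not on its coordinates,
    which is why it works for any input data (X never enters the lift).
    """
    vertex = np.zeros(n)
    vertex[i] = 1.0
    return vertex

Z = np.stack([phi(i, n) for i in range(n)])   # lifted training set, one simplex vertex per sample

# In the lifted space the weight vector w = y defines a separating hyperplane:
# w . phi(i) = y_i, so the sign of the linear score reproduces every label exactly.
w = y.astype(float)
scores = Z @ w
assert np.all(np.sign(scores) == y)
print(f"all {n} samples separated by the hyperplane w.z = 0 after the lift")
</syntaxhighlight>

One-hot vectors are used here as the simplex vertices because they make the separating weight vector trivial to write down; any affinely independent embedding of the samples would serve equally well.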